Adversarial Training Reduces Information and Improves Transferability

Authors

Abstract

Recent results show that features of adversarially trained networks for classification, in addition to being robust, enable desirable properties such as invertibility. The latter property may seem counter-intuitive, as it is widely accepted by the community that classification models should only capture the minimal information (features) required for the task. Motivated by this discrepancy, we investigate the dual relationship between Adversarial Training and Information Theory. We show that Adversarial Training can improve linear transferability to new tasks, from which arises a trade-off between transferability of representations and accuracy on the source task. We validate our results by employing robust networks trained on CIFAR-10, CIFAR-100 and ImageNet, evaluated on several datasets. Moreover, we show that Adversarial Training reduces the Fisher information of the representations about the input and of the weights about the task, and we provide a theoretical argument that explains the invertibility of deterministic networks without violating the principle of minimality. Finally, we leverage these insights to remarkably improve the quality of reconstructed images through inversion.
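For readers unfamiliar with the procedure the abstract builds on, the sketch below shows standard PGD-based adversarial training (robust optimization under an L-infinity threat model). It is a minimal, hypothetical illustration assuming a PyTorch classifier; the function names and hyperparameters are placeholders, not taken from the paper's code.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=7):
    # Inner maximization: ascend the cross-entropy loss inside an
    # L_inf ball of radius eps around the clean inputs x (in [0, 1]).
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    # Outer minimization: an ordinary optimizer step, but on the
    # worst-case (adversarially perturbed) batch instead of the clean one.
    x_adv = pgd_attack(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The linear transferability discussed in the abstract is then commonly measured by freezing a backbone trained this way and fitting only a linear classifier on its features for the new task.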

Related articles

Perceptual-cognitive training improves biological motion perception: evidence for transferability of training in healthy aging.

In our everyday life, processing complex dynamic scenes such as crowds and traffic is of critical importance. Further, it is well documented that there is an age-related decline in complex perceptual-cognitive processing, which can be reversed with training. It has been suggested that a specific dynamic scene perceptual-cognitive training procedure [the three-dimensional multiple object trackin...

Understanding and Enhancing the Transferability of Adversarial Examples

State-of-the-art deep neural networks are known to be vulnerable to adversarial examples, formed by applying small but malicious perturbations to the original inputs. Moreover, the perturbations can transfer across models: adversarial examples generated for a specific model will often mislead other unseen models. Consequently the adversary can leverage it to attack deployed systems without any ...
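As a hypothetical illustration of the transfer phenomenon this snippet describes (not code from the cited paper), the sketch below crafts a single-step FGSM adversarial example against one pretrained torchvision model and checks whether it also changes the prediction of a second, unseen model; the model choices and the random tensor standing in for a real preprocessed image are assumptions.

```python
import torch
import torch.nn.functional as F
from torchvision import models

def fgsm(model, x, y, eps=4/255):
    # Single-step attack: move the input along the sign of the loss gradient.
    x = x.clone().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

# Pretrained weights are downloaded on first use.
source = models.resnet50(weights="IMAGENET1K_V1").eval()  # model the attacker can query
target = models.vgg16(weights="IMAGENET1K_V1").eval()     # unseen "deployed" model

x = torch.rand(1, 3, 224, 224)     # stand-in for a preprocessed input image
y = source(x).argmax(dim=1)        # treat the source prediction as the label
x_adv = fgsm(source, x, y)

print("fools source model:", bool(source(x_adv).argmax(1) != y))
print("transfers to target:", bool(target(x_adv).argmax(1) != target(x).argmax(1)))
```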

Blocking Transferability of Adversarial Examples in Black-Box Learning Systems

Advances in Machine Learning (ML) have led to its adoption as an integral component in many applications, including banking, medical diagnosis, and driverless cars. To further broaden the use of ML models, cloud-based services offered by Microsoft, Amazon, Google, and others have developed ML-as-a-service tools as black-box systems. However, ML classifiers are vulnerable to adversarial examples...

Improving Transferability of Adversarial Examples with Input Diversity

Though convolutional neural networks have achieved state-of-the-art performance on various vision tasks, they are extremely vulnerable to adversarial examples, which are obtained by adding human-imperceptible perturbations to the original images. Adversarial examples can thus be used as a useful tool to evaluate and select the most robust models in safety-critical applications. However, most of ...
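In that line of work, "input diversity" typically means applying a random resize-and-pad transformation to the image before each gradient computation, so the crafted perturbation overfits the source model less and transfers better. The sketch below is a rough, assumed version of such a transform (the sizes and probability are illustrative, not the paper's exact settings); in an iterative attack the loss gradient would be taken through diverse_input(x_adv) rather than x_adv itself.

```python
import random
import torch
import torch.nn.functional as F

def diverse_input(x, max_size=256, prob=0.5):
    # With probability prob, randomly rescale the batch and zero-pad it back to
    # max_size x max_size, so each attack iteration sees a slightly different view.
    # Assumes square inputs smaller than max_size, and that the attacked model
    # accepts max_size x max_size inputs.
    if random.random() > prob:
        return x
    rnd = random.randint(x.shape[-1], max_size - 1)
    resized = F.interpolate(x, size=(rnd, rnd), mode="nearest")
    pad = max_size - rnd
    left, top = random.randint(0, pad), random.randint(0, pad)
    return F.pad(resized, (left, pad - left, top, pad - top), value=0.0)
```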

Modelling nonlinear information behaviour: transferability and progression

Purpose: This paper reports on a project aimed at moving Foster’s Nonlinear Model of Information Seeking Behaviour forwards from an empirically based model focussed on one setting and towards one that is robustly transferable and enables testing of the model in other information seeking situations. Methodology: The method utilised recoding of the original dataset, comparison of code decisions, ...

Journal

Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence

Year: 2021

ISSN: 2159-5399, 2374-3468

DOI: https://doi.org/10.1609/aaai.v35i3.16371